Signal inference in radio astronomy
This dissertation addresses the problem of inferring a signal from an incomplete measurement in the field of radio astronomy. Two imaging algorithms are developed within the framework of information field theory. Both are based on Bayesian analysis; information from the incomplete measurement is complemented by a priori information. To that end, both sources of information are formulated as probability distributions and merged into an a posteriori probability distribution. The a priori information is kept minimal. It reduces to the assumption that the real signal does not fluctuate arbitrarily strongly with respect to position. This construction allows for a statistical estimation of the original signal on all scales.
The first imaging algorithm calculates a three-dimensional map of the Galactic free electron density using dispersion measure data from pulsars. The dispersion of the radio waves a pulsar emits is proportional to the total number of free electrons on the line of sight between pulsar and observer. Therefore, each measured line of sight contains information about the distribution of free electrons in space. The reconstruction problem is a tomography problem similar to the one in medical imaging. Using a simulation, we investigate to what level of detail the free electron density can be reconstructed with data of the upcoming Square Kilometre Array (SKA). The results show that the large-scale features of the free electron density of the Milky Way will be reconstructible with the SKA.
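The forward model behind this tomography is simple to sketch: a dispersion measure is, up to physical constants, the line integral of the free electron density between observer and pulsar. The toy version below illustrates only this forward step; the grid, field values, and sight-line geometry are invented for illustration and this is not the thesis's reconstruction algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2D free-electron-density field on a grid (units are nominal).
n_e = 0.03 + 0.01 * rng.standard_normal((64, 64))
n_e = np.clip(n_e, 0.0, None)  # densities are non-negative

def dispersion_measure(field, start, end, n_steps=200):
    """Approximate DM = integral of n_e along the observer-pulsar segment."""
    t = np.linspace(0.0, 1.0, n_steps)
    points = start[None, :] + t[:, None] * (end - start)[None, :]
    idx = np.clip(points.astype(int), 0, np.array(field.shape) - 1)
    samples = field[idx[:, 0], idx[:, 1]]
    seg_len = np.linalg.norm(end - start)  # path length in grid units
    return samples.mean() * seg_len

observer = np.array([32.0, 32.0])
pulsar = np.array([5.0, 60.0])
dm = dispersion_measure(n_e, observer, pulsar)
print(f"DM along this sight line: {dm:.3f} (grid units)")
```

Inverting many such line integrals for the underlying field is the tomography problem; the thesis regularizes it with the prior assumption that the field does not fluctuate arbitrarily strongly in space.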
The second imaging algorithm is named fastResolve. It reconstructs the radio intensity of the sky from interferometric data. fastResolve is based on Resolve, but adds the capability to separate point sources and to estimate the measurement uncertainty. Most importantly, it is about 100 times faster. A comparison of the algorithm with CLEAN, the standard imaging method for interferometric data in radio astronomy, is performed using observational data of the galaxy cluster Abell 2199 recorded with the Very Large Array. fastResolve reconstructs finer details than CLEAN while introducing fewer artifacts such as negative intensity. Furthermore, fastResolve provides an uncertainty map. This quantity is important for proper scientific use of the result, but is not available using CLEAN.
Furthermore, a formalism is developed that allows the conversion of power spectra of Gaussian fields into the power spectra of log-normal fields and vice versa. This allows the rejuvenation of the power spectrum of the large-scale matter distribution of the Universe. We validate the approach by comparison with a perturbative method and a cosmic emulator.
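The Gaussian/log-normal conversion admits a compact numerical sketch. Assuming the standard relation between the correlation functions of a zero-mean Gaussian field s and the log-normal field m = exp(s), namely C_ln(r) = mbar^2 (exp(C_G(r)) - 1) with mbar = exp(mu + C_G(0)/2), one can translate a power spectrum by passing through configuration space with FFTs. The spectrum and normalization conventions below are invented for illustration; the thesis's formalism is more careful about them.

```python
import numpy as np

n = 256
k = np.fft.fftfreq(n)
# Hypothetical smooth Gaussian power spectrum (arbitrary normalization).
P_G = 1.0 / (1.0 + (np.abs(k) * 40.0) ** 2)

C_G = np.fft.ifft(P_G).real          # correlation function of the Gaussian field s
mu = 0.0                             # assumed mean of s
mbar = np.exp(mu + C_G[0] / 2.0)     # mean of the log-normal field m = exp(s)
C_ln = mbar ** 2 * np.expm1(C_G)     # C_ln(r) = mbar^2 (exp(C_G(r)) - 1)
P_ln = np.fft.fft(C_ln).real         # power spectrum of the log-normal field
```

For weak correlations, expm1(C_G) is close to C_G, so the two spectra nearly coincide; the interesting corrections appear when the Gaussian correlations are of order unity.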
Bayesian weak lensing tomography: Reconstructing the 3D large-scale distribution of matter with a lognormal prior
We present a Bayesian reconstruction algorithm that infers the
three-dimensional large-scale matter distribution from the weak gravitational
lensing effects measured in the image shapes of galaxies. The algorithm is
designed to also work with non-Gaussian posterior distributions which arise,
for example, from a non-Gaussian prior distribution. In this work, we use a
lognormal prior and compare the reconstruction results to a Gaussian prior in a
suite of increasingly realistic tests on mock data. We find that in cases of
high noise levels (i.e. for low source galaxy densities and/or high shape
measurement uncertainties), both normal and lognormal priors lead to
reconstructions of comparable quality, but with the lognormal reconstruction
being prone to mass-sheet degeneracy. In the low-noise regime and on small
scales, the lognormal model produces better reconstructions than the normal
model: The lognormal model 1) enforces non-negative densities, while negative
densities are present when a normal prior is employed, 2) better traces the
extremal values and the skewness of the true underlying distribution, and 3)
yields a higher pixel-wise correlation between the reconstruction and the true
density.
Comment: 23 pages, 12 figures; updated to match version accepted for publication in PR
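Point 1) follows directly from the parameterization. A minimal sketch, assuming the lognormal model writes the density contrast as delta = exp(s) - 1 with Gaussian s (a common convention; the paper's exact parameterization may differ): the density 1 + delta = exp(s) is then positive for every realization, while a Gaussian prior on delta itself is unconstrained in sign.

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(100_000)      # Gaussian latent field values
delta_lognormal = np.exp(s) - 1.0     # lognormal-prior density contrast, always > -1
delta_gaussian = s                    # Gaussian-prior density contrast, unbounded below

print("minimum density, lognormal prior:", (1.0 + delta_lognormal).min())
print("fraction of negative densities, Gaussian prior:",
      (1.0 + delta_gaussian < 0).mean())
```

The exponential map also skews the one-point distribution toward high values, which is the property behind points 2) and 3).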
Dynamic system classifier
Stochastic differential equations describe many physical, biological, and
sociological systems well, despite the simplifications often made in their
derivation. Here, the use of simple stochastic differential equations to
characterize and classify complex dynamical systems is proposed within a
Bayesian framework. To this end, we develop a dynamic system classifier (DSC).
The DSC first abstracts training data of a system in terms of time-dependent
coefficients of the descriptive stochastic differential equation. Thereby, the
DSC identifies unique correlation structures within the training data. For
definiteness, we restrict the presentation of the DSC to oscillation processes with
a time-dependent frequency ω(t) and damping factor γ(t). Although
real systems might be more complex, this simple oscillator captures many
characteristic features. The ω and γ timelines represent the
abstract system characterization and permit the construction of efficient
signal classifiers. Numerical experiments show that such classifiers perform
well even in the low signal-to-noise regime.
Comment: 11 pages, 8 figures
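The oscillation process named above can be sketched as a damped, noise-driven oscillator with time-dependent coefficients, dx = v dt, dv = (-γ(t) v - ω(t)² x) dt + σ dW, integrated with Euler-Maruyama. All parameter values here are invented for illustration; the DSC infers the ω and γ timelines from data, whereas this snippet merely prescribes them to generate a process.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps, sigma = 1e-3, 20_000, 0.5
t = np.arange(n_steps) * dt

omega = 2.0 * np.pi * (1.0 + 0.5 * t / t[-1])   # slowly rising frequency (invented)
gamma = 0.3 * np.ones_like(t)                    # constant damping (invented)

x = np.zeros(n_steps)                            # position
v = np.zeros(n_steps)                            # velocity
for i in range(n_steps - 1):
    dw = rng.standard_normal() * np.sqrt(dt)     # Wiener increment
    x[i + 1] = x[i] + v[i] * dt
    v[i + 1] = v[i] + (-gamma[i] * v[i] - omega[i] ** 2 * x[i]) * dt + sigma * dw
```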
Signal inference with unknown response: Calibration-uncertainty renormalized estimator
The calibration of a measurement device is crucial for every scientific
experiment, where a signal has to be inferred from data. We present CURE, the
calibration uncertainty renormalized estimator, to reconstruct a signal and
simultaneously the instrument's calibration from the same data without knowing
the exact calibration, but only its covariance structure. The idea of CURE,
developed in the framework of information field theory, is to start with an
assumed calibration and successively include more and more portions of the
calibration uncertainty into the signal inference equations, absorbing the
resulting corrections into renormalized signal (and calibration) solutions.
Thereby, the signal inference and calibration problem turns into solving a
single system of ordinary differential equations and can be identified with
common resummation techniques used in field theories. We verify CURE by
applying it to a simplistic toy example and compare it against existing
self-calibration schemes, Wiener filter solutions, and Markov Chain Monte Carlo
sampling. We conclude that the method keeps up in accuracy with the
best self-calibration methods and serves as a non-iterative alternative to them.
Improving self-calibration
Response calibration is the process of inferring how much the measured data
depend on the signal one is interested in. It is essential for any quantitative
signal estimation on the basis of the data. Here, we investigate
self-calibration methods for linear signal measurements and linear dependence
of the response on the calibration parameters. The common practice is to
augment an external calibration solution using a known reference signal with an
internal calibration on the unknown measurement signal itself. Contemporary
self-calibration schemes try to find a self-consistent solution for signal and
calibration by exploiting redundancies in the measurements. This can be
understood in terms of maximizing the joint probability of signal and
calibration. However, the full uncertainty structure of this joint probability
around its maximum is not taken into account by these schemes.
Therefore, better schemes -- in the sense of minimal squared error -- can be designed
by accounting for asymmetries in the uncertainty of signal and calibration. We
argue that at least a systematic correction of the common self-calibration
scheme should be applied in many measurement situations in order to properly
treat uncertainties of the signal on which one calibrates. Otherwise the
calibration solutions suffer from a systematic bias, which consequently
distorts the signal reconstruction. Furthermore, we argue that non-parametric,
signal-to-noise filtered calibration should provide more accurate
reconstructions than the common bin averages and provide a new, improved
self-calibration scheme. We illustrate our findings with a simplistic numerical
example.
Comment: 17 pages, 3 figures, revised version, title change
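The "self-consistent solution for signal and calibration" the text describes can be illustrated with a toy model (not the authors' scheme): data d[i, t] = g[i] · s[t] + noise, with unknown per-channel gains g and an unknown signal s. Alternating the two linear least-squares solves maximizes the joint probability of signal and calibration, and the gauge freedom g → a·g, s → s/a must be fixed by hand, mirroring the degeneracies real self-calibration faces.

```python
import numpy as np

rng = np.random.default_rng(3)
n_chan, n_time = 8, 200
g_true = 1.0 + 0.2 * rng.standard_normal(n_chan)          # hidden gains
s_true = np.sin(np.linspace(0, 6 * np.pi, n_time))        # hidden signal
d = g_true[:, None] * s_true[None, :] + 0.05 * rng.standard_normal((n_chan, n_time))

g = np.ones(n_chan)                                        # assumed starting calibration
for _ in range(50):
    s = (g[:, None] * d).sum(0) / (g ** 2).sum()           # LSQ signal given gains
    g = (s[None, :] * d).sum(1) / (s ** 2).sum()           # LSQ gains given signal
    scale = np.abs(g).mean()                               # fix the gauge freedom
    g, s = g / scale, s * scale
```

This alternation is exactly the maximum-joint-probability strategy the abstract criticizes: it converges to a self-consistent point estimate but ignores the asymmetric uncertainty structure around that maximum.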
All-sky reconstruction of the primordial scalar potential from WMAP temperature data
An essential quantity required to understand the physics of the early
Universe, in particular the inflationary epoch, is the primordial scalar
potential and its statistics. We present for the first time an all-sky
reconstruction of this potential, with corresponding uncertainty, from WMAP's
cosmic microwave background (CMB) temperature data -- a map of the very early
Universe right after the inflationary epoch. This has been achieved by applying
a Bayesian inference method that separates the whole inverse problem of the
reconstruction into many independent ones, each of them solved by an optimal
linear filter (Wiener filter). In this way, the three-dimensional potential
gets reconstructed slice by slice resulting in a thick shell of nested
spheres around the comoving distance to the last scattering surface. Each slice
represents the primordial scalar potential projected onto a sphere with
corresponding distance. Furthermore, we present an advanced method for
inferring the potential and its power spectrum simultaneously from data, but argue
that applying it requires polarization data with high signal-to-noise levels
not available yet. Future CMB data should improve results significantly, as
polarization data will fill the present blind gaps of the
reconstruction.
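The per-slice optimal linear filter mentioned above is the classical Wiener filter m = S Rᵀ (R S Rᵀ + N)⁻¹ d for a linear measurement d = R s + n with signal covariance S and noise covariance N. The sketch below uses an invented 1D setup (masked sampling of a field with a made-up smooth prior covariance), not the actual spherical-slice geometry of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_data = 64, 32

# Hypothetical smooth prior covariance: correlations decay with distance.
x = np.arange(n_pix)
S = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 4.0) ** 2)
S += 1e-6 * np.eye(n_pix)                      # jitter for numerical stability
N = 0.1 * np.eye(n_data)                       # white measurement noise

# Response R: observe 32 randomly chosen pixels (an incomplete measurement).
R = np.zeros((n_data, n_pix))
R[np.arange(n_data), rng.choice(n_pix, n_data, replace=False)] = 1.0

s = rng.multivariate_normal(np.zeros(n_pix), S)           # a "true" signal draw
d = R @ s + np.sqrt(0.1) * rng.standard_normal(n_data)    # noisy, masked data

m = S @ R.T @ np.linalg.solve(R @ S @ R.T + N, d)         # Wiener filter estimate
```

The prior covariance S is what lets the estimate interpolate across the unobserved pixels, which is how the paper fills sky regions between data constraints.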
Sharpening up Galactic all-sky maps with complementary data - A machine learning approach
Galactic all-sky maps at very disparate frequencies, like in the radio and
γ-ray regime, show similar morphological structures. This mutual
information reflects the imprint of the various physical components of the
interstellar medium. We want to use multifrequency all-sky observations to test
resolution improvement and restoration of unobserved areas for maps in certain
frequency ranges. For this we aim to reconstruct or predict from sets of other
maps all-sky maps that, in their original form, lack a high resolution compared
to other available all-sky surveys or are incomplete in their spatial coverage.
Additionally, we want to investigate the commonalities and differences that the
ISM components exhibit over the electromagnetic spectrum. We build a
multidimensional representation of the joint pixel-brightness distribution of
the maps using a Gaussian mixture model and test how predictive it is: how well
can one map be reproduced based on subsets of the other maps? Tests with mock data
show that reconstructing the map of a certain frequency from other frequency
regimes works astonishingly well, reliably predicting small-scale details well
below the spatial resolution of the initially learned map. Applied to the
observed multifrequency data sets of the Milky Way this technique is able to
improve the resolution of, e.g., the low-resolution Fermi LAT maps as well as
to recover the sky from artifact-contaminated data like the ROSAT 0.855 keV
map. The predicted maps generally show fewer imaging artifacts compared to the
original ones. A comparison of predicted and original maps highlights
surprising structures, imaging artifacts (fortunately not reproduced in the
prediction), and features genuine to the respective frequency range that are
not present at other frequency bands. We discuss limitations of this machine
learning approach and ideas for how to overcome them.
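The prediction step of such a mixture model can be sketched with a hand-built two-component example (all parameters invented, no fitting shown): conditioning a Gaussian mixture over the joint brightness of two "maps" (x, y) on the observed x yields E[y | x] as a responsibility-weighted blend of per-component linear regressions.

```python
import numpy as np

# Component weights, means, and 2x2 covariances of the assumed mixture.
weights = np.array([0.6, 0.4])
means = np.array([[0.0, 0.0], [3.0, 4.0]])
covs = np.array([[[1.0, 0.8], [0.8, 1.0]],
                 [[1.0, -0.5], [-0.5, 1.0]]])

def predict_y_given_x(x):
    """E[y | x] under the mixture: responsibilities times conditional means."""
    densities = np.array([
        np.exp(-0.5 * (x - m[0]) ** 2 / c[0, 0]) / np.sqrt(2 * np.pi * c[0, 0])
        for m, c in zip(means, covs)
    ])
    resp = weights[:, None] * densities        # per-component responsibilities
    resp /= resp.sum(axis=0)
    cond_means = np.array([
        m[1] + c[1, 0] / c[0, 0] * (x - m[0])  # per-component linear regression
        for m, c in zip(means, covs)
    ])
    return (resp * cond_means).sum(axis=0)

x_obs = np.linspace(-2.0, 5.0, 8)
print(predict_y_given_x(x_obs))
```

Because each component contributes its own regression line, the blended prediction can follow different x-y relations in different brightness regimes, which is what lets one frequency map inform another beyond a single global correlation.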